Building a Multimodal Laughter Database for Emotion Recognition

Authors

  • Merlin Suarez
  • Jocelynn Cu
  • Madelene Sta. Maria
Abstract

Laughter is a significant paralinguistic cue that is largely ignored in multimodal affect analysis. In this work, we investigate how a multimodal laughter corpus can be constructed and annotated with both discrete and dimensional emotion labels for acted and spontaneous laughter. Professional actors enacted emotions to produce the acted clips, while spontaneous laughter was collected from volunteers. Experts annotated the acted laughter clips, while volunteers with acceptable Empathy Quotient scores annotated the spontaneous laughter clips. The data were pre-processed to remove environmental noise and then manually segmented from the onset of the expression to its offset. Our findings indicate that laughter carries distinct emotions, and that emotion in laughter is recognized better from audio information than from facial information. This may be explained by emotion regulation, i.e., laughter is used to suppress or regulate certain emotions. Furthermore, contextual information plays a crucial role in understanding the kind of laughter and the emotion in the enactment.
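As an illustration of the kind of pre-processing the abstract describes (environmental noise removal and onset-to-offset segmentation), the following is a minimal Python sketch, not the authors' pipeline: it assumes the librosa and soundfile libraries, hypothetical file names, and a simple silence-trimming threshold standing in for manual onset/offset marking.

    # Illustrative sketch only; file paths and the top_db threshold are assumptions.
    import librosa
    import soundfile as sf

    def segment_laughter_clip(in_path: str, out_path: str, top_db: float = 30.0) -> None:
        # Load the recording at its original sample rate.
        y, sr = librosa.load(in_path, sr=None)
        # Trim leading/trailing low-energy frames as a crude stand-in for
        # marking the onset and offset of the laughter expression.
        y_trimmed, _ = librosa.effects.trim(y, top_db=top_db)
        sf.write(out_path, y_trimmed, sr)

    if __name__ == "__main__":
        segment_laughter_clip("clips/laugh_raw_001.wav", "clips/laugh_segmented_001.wav")

In the corpus described above, segmentation was performed manually, so automatic trimming like this would at most serve as a rough first pass before human correction.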


Similar Articles

A Database for Automatic Persian Speech Emotion Recognition: Collection, Processing and Evaluation

Abstract   Recent developments in robotics automation have motivated researchers to improve the efficiency of interactive systems by making a natural man-machine interaction. Since speech is the most popular method of communication, recognizing human emotions from speech signal becomes a challenging research topic known as Speech Emotion Recognition (SER). In this study, we propose a Persian em...


Amplifying a Sense of Emotion toward Drama-Long Short-Term Memory Recurrent Neural Network for dynamic emotion recognition

This paper tries to amplify a sense of emotion toward drama, using a Long Short-Term Memory Recurrent Neural Network to model and predict dynamic emotion (arousal and valence). After building the model, we transplant the whole framework and visualize its results. We have two demo versions: an RGB version and a Vignette version. The RGB version modulates the RGB value of each frame in the video...


The eNTERFACE'05 Audio-Visual Emotion Database

This paper presents an audio-visual emotion database that can be used as a reference database for testing and evaluating video, audio or joint audio-visual emotion recognition algorithms. Additional uses may include the evaluation of algorithms performing other multimodal signal processing tasks, such as multimodal person identification or audio-visual speech recognition. This paper presents th...


Analysis and recoding of multimodal data

Emotions are part of our lives. Emotions can enhance the meaning of our communication. However, communication with computers is still done by keyboard and mouse. In this human-computer interaction there is no room for emotions, whereas if we communicated with machines the way we do in face-to-face communication, much information could be extracted from the context and emotion of the speaker. W...


MEC 2016: The Multimodal Emotion Recognition Challenge of CCPR 2016

Emotion recognition is a significant research field of pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is a part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a c...



Journal:

Volume   Issue

Pages   -

Publication date: 2012